Search Results: "rmb"

16 October 2022

Sven Hoexter: CentOS 9, stunnel, an openssl memory leak and a VirtualBox crash

tl;dr; OpenSSL 3.0.1 leaks memory in ssl3_setup_write_buffer(), seems to be fixed in 3.0.2. The issue manifests at least in stunnel and keepalived on CentOS 9. In addition I learned the hard way that running a not so recent VirtualBox version on Debian bullseye led to dh parameter generation crashing in libcrypto in bn_sqr8x_internal().

A recent rabbit hole I went down. The actual bug in openssl was nailed down and documented by Quentin Armitage on GitHub in the keepalived project. My bugreport with all the back and forth is in the RedHat Bugzilla as #2128412.

Act I - Hello stunnel, this is the OOMkiller Calling
We started to use stunnel on Google Cloud compute engine instances running CentOS 9. The loadbalancer in front of those instances used a TCP health check to validate the backend availability. A day or so later the stunnel instances got killed by the OOMkiller. Restarting stunnel and looking into /proc/<pid>/smaps showed a heap segment growing quite quickly.

Act II - Reproducing the Issue
While I'm not the biggest fan of VirtualBox and Vagrant, I've to admit it's quite nice to just fire up a VM image and give other people a chance to recreate that setup as well. Since VirtualBox is no longer released with Debian/stable I just recompiled what was available in unstable at the time of the bullseye release, and used that. That enabled me to just start a CentOS 9 VM, set up stunnel with a minimal config, grab netcat and a for loop, and watch the memory grow. E.g.
while true; do nc -z localhost 2600; sleep 1; done
To my surprise, in addition to the memory leak, I also observed some crashes, but did not yet care too much about those.

Act III - Wrong Suspect, a Workaround and Bugreporting
Of course the first idea was that something must be wrong in stunnel itself. But I could not find any recent bugreports. My assumption is that there are still a few people around using CentOS and stunnel, so someone else should probably have seen it before. Just to be sure I recompiled the latest stunnel package from Fedora. Didn't change anything. Next I recompiled it without almost all the patches Fedora/RedHat carries. Nope, no progress. Next idea: maybe this is related to the fact that we do not initiate a TLS context after connecting? So we changed the test case from nc to openssl s_client, and the loadbalancer healthcheck from TCP to a TLS based one. Tada, a workaround, no more memory leaking. In addition I gave Fedora (they have Vagrant VirtualBox images in the "Cloud" Spin, e.g. here for Fedora 36) and my local Debian installation a try. No leaks experienced on either. Next I reported #2128412.

Act IV - Crash in libcrypto and a VirtualBox Bug
When I moved with the test case from the Google Cloud compute instance to my local VM I encountered some crashes. That morphed into a real problem when I started to run stunnel with gdb and valgrind. All crashes happened in libcrypto bn_sqr8x_internal() when generating new dh parameters (stunnel does that for you if you do not use static dh parameters). I quickly worked around that by generating static dh parameters for stunnel. After some back and forth I suspected VirtualBox as the culprit. Recompiling the current VirtualBox version (6.1.38-dfsg-3) from unstable on bullseye works without any changes, and upgrading actually fixed that issue.

Epilog
I highly appreciate that RedHat, with all the bashing around the future of CentOS, still works on community contributed bugreports. My kudos go to Clemens Lang. :) Now that the root cause is clear, I guess RedHat will push out a fix for the openssl 3.0.1 based release they have in RHEL/CentOS 9. Until that is available, at least stunnel and keepalived are known to be affected. If you run stunnel on something public it's not that pretty, because even a low rate of TCP connections will result in a DoS condition.
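For illustration, the two workarounds mentioned above could look roughly like this (a sketch; the port matches the test loop, the paths are assumptions, and how stunnel consumes static dh parameters depends on your setup - appending them to the certificate file is one common way):
# TLS-based connection test instead of the bare TCP connect done by nc
while true; do openssl s_client -connect localhost:2600 -quiet </dev/null >/dev/null 2>&1; sleep 1; done
# pre-generate static dh parameters so stunnel does not create them at runtime
openssl dhparam 2048 > dhparam.pem
cat dhparam.pem >> /etc/stunnel/stunnel.pem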

12 April 2022

Sven Hoexter: Emulating Raspi2 like hardware with RaspiOS in 2022

Update of my notes from 2020.
# Download a binary device tree file and matching kernel a good soul uploaded to github
wget https://github.com/vfdev-5/qemu-rpi2-vexpress/raw/master/kernel-qemu-4.4.1-vexpress
wget https://github.com/vfdev-5/qemu-rpi2-vexpress/raw/master/vexpress-v2p-ca15-tc1.dtb
# Download the official Raspberry Pi OS (Raspbian) image without X
wget https://downloads.raspberrypi.org/raspios_lite_armhf/images/raspios_lite_armhf-2022-04-07/2022-04-04-raspios-bullseye-armhf-lite.img.xz
unxz 2022-04-04-raspios-bullseye-armhf-lite.img.xz
# Convert it from the raw image to a qcow2 image and add some space
qemu-img convert -f raw -O qcow2 2022-04-04-raspios-bullseye-armhf-lite.img rasbian.qcow2
qemu-img resize rasbian.qcow2 4G
# make sure we get a user account setup
echo "me:$(echo 'test123' openssl passwd -6 -stdin)" > userconf
sudo guestmount -a rasbian.qcow2 -m /dev/sda1 /mnt
sudo mv userconf /mnt
sudo guestunmount /mnt
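# untested alternative: an empty file named "ssh" in the boot partition also
# enables sshd on first boot, which would make the systemctl steps below optional:
# sudo guestmount -a rasbian.qcow2 -m /dev/sda1 /mnt && sudo touch /mnt/ssh && sudo guestunmount /mnt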
# start qemu
qemu-system-arm -m 2048M -M vexpress-a15 -cpu cortex-a15 \
 -kernel kernel-qemu-4.4.1-vexpress -no-reboot \
 -smp 2 -serial stdio \
 -dtb vexpress-v2p-ca15-tc1.dtb -sd rasbian.qcow2 \
 -append "root=/dev/mmcblk0p2 rw rootfstype=ext4 console=ttyAMA0,15200 loglevel=8" \
 -nic user,hostfwd=tcp::5555-:22
# login at the serial console as user me with password test123
sudo -i
# enable ssh
systemctl enable ssh
systemctl start ssh
# resize partition and filesystem
parted /dev/mmcblk0 resizepart 2 100%
resize2fs /dev/mmcblk0p2
Now I can login via ssh and start to play:
ssh me@localhost -p 5555

3 February 2022

Sven Hoexter: Suntime Calculation with Lua and the Great Gift of Open Source

tl;dr I ported a part of the python-suntime library to Lua to use it on OpenWRT and RutOS powered devices: suntime.lua. There are those unremarkable things which let you pause for a moment, and realize what a great gift of our time open source software and open knowledge is. At some point in time someone figured out how to calculate the sunrise and sunset time on the current date for your location. Someone else wrote that up and probably again a different person published it on the internet. The Internet Archive preserved a copy of it so I can still link to it. Someone took this algorithm and published a code sample on StackOverflow, which was later on used by the SatAgro guys to create the python-suntime library. Now I could come along, copy the core function of this library and convert it within a few hours (mostly spent learning a bit of Lua) into a working script fulfilling my needs.

20 January 2022

Sven Hoexter: Running OpenWRT x86 in qemu

Sometimes it's nice for testing purposes to have the OpenWRT userland available locally. Since there is an x86 build available, one can just run it within qemu.
wget https://downloads.openwrt.org/releases/21.02.1/targets/x86/64/openwrt-21.02.1-x86-64-generic-squashfs-combined.img.gz
gunzip openwrt-21.02.1-x86-64-generic-squashfs-combined.img.gz
qemu-img convert -f raw -O qcow2 openwrt-21.02.1-x86-64-generic-squashfs-combined.img openwrt-21.02.1.qcow2
qemu-img resize openwrt-21.02.1.qcow2 200M
qemu-system-x86_64 -M q35 \
  -drive file=openwrt-21.02.1.qcow2,id=d0,if=none,bus=0,unit=0 \
  -device ide-hd,drive=d0,bus=ide.0 -nic user,hostfwd=tcp::5556-:22
# you've to change the network configuration to retrieve an IP via
# dhcp for the lan bridge br-lan
vi /etc/config/network
  - change option proto 'static' to 'dhcp'
  - remove IP address and netmask setting
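# untested alternative: the same change done non-interactively via uci,
# assuming the LAN interface keeps its default name "lan"
uci set network.lan.proto='dhcp'
uci delete network.lan.ipaddr
uci delete network.lan.netmask
uci commit network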
/etc/init.d/network restart
# now you should've an ip out of 10.0.2.0/24
ssh root@localhost -p 5556
# remember ICMP does not work but otherwise you should have
# IP networking available
opkg update
opkg install curl

15 October 2021

Sven Hoexter: ThinkPad P15v Gen1, Xorg and a Samsung QHD Display

Wasted quite some hours until I found a working Modeline in this stack exchange post so the ThinkPad works with an HDMI attached Samsung QHD display. The internal display of the ThinkPad is an FHD display detected as eDP-1, the external one is DP-3 and according to the packaging known by Samsung as S24A600NWU. The auto detected EDID modes for QHD - 2560x1440 - did not work at all, the display simply stayed dark. After a lot of back and forth with the i915 driver vs nouveau vs nvidia/nvidia-drm, with and without modesetting, the following Modeline did the magic:
xrandr --newmode 2560x1440_54.97  221.00  2560 2608 2640 2720  1440 1443 1447 1478  +HSync -VSync
xrandr --addmode DP-3 2560x1440_54.97
xrandr --output DP-3 --mode 2560x1440_54.97 --right-of eDP-1 --primary
Modelines for 50Hz and 60Hz generated with cvt 2560 1440 60 did not work, neither did the one extracted with edid-decode -X from the hex blob found in .local/share/xorg/Xorg.0.log. Of the auto-detected Modelines, FHD - 1920x1080 - did work. In case someone struggles with a similar setup, that might be a starting point. Fun part: if I attach my several years old Dell E7470, everything is just fine out of the box. But that one just has an Intel GPU and not the unholy combination I've here:
$ lspci | grep -E "VGA|3D"
00:02.0 VGA compatible controller: Intel Corporation CometLake-H GT2 [UHD Graphics] (rev 05)
01:00.0 3D controller: NVIDIA Corporation GP107GLM [Quadro P620] (rev ff)
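To avoid re-running those xrandr calls after every login, they can go into a small startup script; ~/.xprofile is one common hook, though whether your display manager sources it is an assumption:
cat >> ~/.xprofile <<'EOF'
# add the custom QHD mode for the Samsung display on DP-3 and activate it
# (the --newmode call fails harmlessly on later runs once the mode exists)
xrandr --newmode 2560x1440_54.97 221.00 2560 2608 2640 2720 1440 1443 1447 1478 +HSync -VSync
xrandr --addmode DP-3 2560x1440_54.97
xrandr --output DP-3 --mode 2560x1440_54.97 --right-of eDP-1 --primary
EOF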

14 September 2021

Sven Hoexter: PV - Monitoring Envertech Microinverter via envertecportal.com

Some time ago I looked briefly at an Envertech data logger for small scale photovoltaic setups. Turned out that PV inverters are kinda unreliable, and you really have to monitor them to notice downtimes and defects. Since my pal shot for a quick win I've cobbled together another Python script to query the portal at www.envertecportal.com, and report back if the generated power is down to 0. The script is currently run on a vserver via cron and reports back via the system MTA. So yeah, you need to have something like that already at hand.

Script and Configuration
You've to provide your PV system's location with latitude and longitude so the script can calculate (via python3-suntime) the sunrise and sunset times. At the location we deal with we expect to generate some power at least from sunrise + 1h to sunset - 1h. That is tunable via the configuration option toleranceSeconds. Retrieving the stationId is a bit ugly because it's not provided via any API, instead it's rendered serverside into the website. So I just logged in on the portal and picked it up by looking into the page source.

www.envertecportal.com API
I guess this is some classic in the IoT land, but neither the documentation provided on the portal frontpage as docx, nor the API docs at port 8090, are complete and correct. The few bits I gathered via the Firefox Web Developer Tools are listed below, followed by a short curl sketch.
  1. Login https://www.envertecportal.com/apiaccount/login - POST, send userName and pwd containing your login name and password. The response JSON is very explicit if your login was not successful and why.
  2. Store the session cookie called ASP.NET_SessionId for use on all subsequent requests.
  3. Retrieve station info https://www.envertecportal.com/ApiStations/getStationInfo - POST, send ASP.NET_SessionId and stationId with the ID of the station. Returns a JSON with an object named Data. The field Power contains the currently generated power as a float with two digits (e.g. 0.01).
  4. Logout https://www.envertecportal.com/apiAccount/Logout - POST, send ASP.NET_SessionId.
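Put together, the flow could look roughly like this with curl (endpoint paths and field names are taken from the list above; credentials and station ID are placeholders, and the exact request encoding the portal expects is an assumption):
# login and store the ASP.NET_SessionId cookie for the following requests
curl -s -c cookies.txt -d 'userName=you@example.com' -d 'pwd=yourpassword' https://www.envertecportal.com/apiaccount/login
# query the station info, the Data.Power field holds the currently generated power
curl -s -b cookies.txt -d 'stationId=YOURSTATIONID' https://www.envertecportal.com/ApiStations/getStationInfo
# logout again
curl -s -b cookies.txt -X POST https://www.envertecportal.com/apiAccount/Logout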
Some Surprises
There were a few surprises, maybe they help others dealing with an Envertech setup.
  1. The portal truncates passwords at 16 chars.
  2. The "Forget Password?" function mails you back the password in plain text (that's how I learned about 1.).
  3. The login API endpoint reporting the exact reason why the login failed is somewhat out of fashion. Though this one is probably not a credential stuffing target because there is no money to make, so don't care.
  4. The data logger reports the data to www.envertecportal.com at port 10013.
  5. There is some checksumming done on the reported data, but the system is not replay safe. So you can send it any valid data string at a later time and get wrong data recorded.
  6. People at forum.fhem.de decoded some values but could not figure out the checksumming so far.

14 June 2021

François Marier: Self-hosting an Ikiwiki blog

8.5 years ago, I moved my blog to Ikiwiki and Branchable. It's now time for me to take the next step and host my blog on my own server. This is how I migrated from Branchable to my own Apache server.

Installing Ikiwiki dependencies
Here are all of the extra Debian packages I had to install on my server:
apt install ikiwiki ikiwiki-hosting-common gcc libauthen-passphrase-perl libcgi-formbuilder-perl libcrypt-ssleay-perl libjson-xs-perl librpc-xml-perl python-docutils libxml-feed-perl libsearch-xapian-perl libmailtools-perl highlight-common xapian-omega
apt install --no-install-recommends ikiwiki-hosting-web libgravatar-url-perl libmail-sendmail-perl libcgi-session-perl
apt purge libnet-openid-consumer-perl
Then I enabled the CGI module in Apache:
a2enmod cgi
and un-commented the following in /etc/apache2/mods-available/mime.conf:
AddHandler cgi-script .cgi

Creating a separate user account
Since Ikiwiki needs to regenerate my blog whenever a new article is pushed to the git repo or a comment is accepted, I created a restricted user account for it:
adduser blog
adduser blog sshuser
chsh -s /usr/bin/git-shell blog

git setup
Thanks to Branchable storing blogs in git repositories, I was able to import my blog using a simple git clone in /home/blog (the srcdir):
git clone --bare git://feedingthecloud.branchable.com/ source.git
Note that the name of the directory (source.git) is important for the ikiwikihosting plugin to work. Then I pulled the .setup file out of the setup branch in that repo and put it in /home/blog/.ikiwiki/FeedingTheCloud.setup. After that, I deleted the setup branch and the origin remote from that clone:
git branch -d setup
git remote rm origin
Following the recommended git configuration, I created a working directory (the repository) for the blog user to modify the blog as needed:
cd /home/blog/
git clone /home/blog/source.git FeedingTheCloud
I added my own ssh public key to /home/blog/.ssh/authorized_keys so that I could push to the srcdir from my laptop. Finally, I generated a new ssh key without a passphrase:
ssh-keygen -t ed25519
and added it as deploy key to the GitHub repo which acts as a read-only mirror of my blog.

Ikiwiki config
While I started with the Branchable setup file, I changed the following things in it:
adminemail: webmaster@fmarier.org
srcdir: /home/blog/FeedingTheCloud
destdir: /var/www/blog
url: https://feeding.cloud.geek.nz
cgiurl: https://feeding.cloud.geek.nz/blog.cgi
cgi_wrapper: /var/www/blog/blog.cgi
cgi_wrappermode: 675
add_plugins:
- goodstuff
- lockedit
- comments
- blogspam
- sidebar
- attachment
- favicon
- format
- highlight
- search
- theme
- moderatedcomments
- flattr
- calendar
- headinganchors
- notifyemail
- anonok
- autoindex
- date
- relativedate
- htmlbalance
- pagestats
- sortnaturally
- ikiwikihosting
- gitpush
- emailauth
disable_plugins:
- brokenlinks
- fortune
- more
- openid
- orphans
- passwordauth
- progress
- recentchanges
- repolist
- toggle
- txt
sslcookie: 1
cookiejar:
  file: /home/blog/.ikiwiki/cookies
useragent: ikiwiki
git_wrapper: /home/blog/source.git/hooks/post-update
urlalias:
- http://feeds.cloud.geek.nz/
- http://www.feeding.cloud.geek.nz/
owner: francois@fmarier.org
hostname: feeding.cloud.geek.nz
emailauth_sender: login@fmarier.org
allowed_attachments: admin()
Then I created the destdir:
mkdir /var/www/blog
chown blog:blog /var/www/blog
and generated the initial copy of the blog as the blog user:
ikiwiki --setup .ikiwiki/FeedingTheCloud.setup --wrappers --rebuild
One thing that failed to generate properly was the tag cloud (from the pagestats plugin). I have not been able to figure out why it fails to generate any output when run this way, but if I push to the repo and let the git hook handle the rebuilding of the wiki, the tag cloud is generated correctly. Consequently, fixing this is not high on my list of priorities, but if you happen to know what the problem is, please reach out.

Apache config
Here's the Apache config I put in /etc/apache2/sites-available/blog.conf:
<VirtualHost *:443>
    ServerName feeding.cloud.geek.nz
    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/feeding.cloud.geek.nz/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/feeding.cloud.geek.nz/privkey.pem
    Header set Strict-Transport-Security: "max-age=63072000; includeSubDomains; preload"
    Include /etc/fmarier-org/blog-common
</VirtualHost>
<VirtualHost *:443>
    ServerName www.feeding.cloud.geek.nz
    ServerAlias feeds.cloud.geek.nz
    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/feeding.cloud.geek.nz/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/feeding.cloud.geek.nz/privkey.pem
    Redirect permanent / https://feeding.cloud.geek.nz/
</VirtualHost>
<VirtualHost *:80>
    ServerName feeding.cloud.geek.nz
    ServerAlias www.feeding.cloud.geek.nz
    ServerAlias feeds.cloud.geek.nz
    Redirect permanent / https://feeding.cloud.geek.nz/
</VirtualHost>
and the common config I put in /etc/fmarier-org/blog-common:
ServerAdmin webmaster@fmarier.org
DocumentRoot /var/www/blog
LogLevel core:info
CustomLog ${APACHE_LOG_DIR}/blog-access.log combined
ErrorLog ${APACHE_LOG_DIR}/blog-error.log
AddType application/rss+xml .rss
<Location /blog.cgi>
        Options +ExecCGI
</Location>
before enabling all of this using:
a2ensite blog
apache2ctl configtest
systemctl restart apache2.service
The feeds.cloud.geek.nz domain used to be pointing to Feedburner and so I need to maintain it in order to avoid breaking RSS feeds from folks who added my blog to their reader a long time ago.

Server-side improvements
Since I'm now in control of the server configuration, I was able to make several improvements to how my blog is served. First of all, I enabled the HTTP/2 and Brotli modules:
a2enmod http2
a2enmod brotli
and enabled Brotli compression by putting the following in /etc/apache2/conf-available/compression.conf:
<IfModule mod_brotli.c>
  <IfDefine !TRANSFER_COMPRESSION>
    Define TRANSFER_COMPRESSION BROTLI_COMPRESS
  </IfDefine>
</IfModule>
<IfModule mod_deflate.c>
  <IfDefine !TRANSFER_COMPRESSION>
    Define TRANSFER_COMPRESSION DEFLATE
  </IfDefine>
</IfModule>
<IfDefine TRANSFER_COMPRESSION>
  <IfModule mod_filter.c>
    AddOutputFilterByType ${TRANSFER_COMPRESSION} text/html text/plain text/xml text/css text/javascript
    AddOutputFilterByType ${TRANSFER_COMPRESSION} application/x-javascript application/javascript application/ecmascript
    AddOutputFilterByType ${TRANSFER_COMPRESSION} application/rss+xml
    AddOutputFilterByType ${TRANSFER_COMPRESSION} application/xml
  </IfModule>
</IfDefine>
and replacing /etc/apache2/mods-available/deflate.conf with the following:
# Moved to /etc/apache2/conf-available/compression.conf as per https://bugs.debian.org/972632
before enabling this new config:
a2enconf compression
Next, I made my blog available as a Tor onion service by putting the following in /etc/apache2/sites-available/blog.conf:
<VirtualHost *:443>
    ServerName feeding.cloud.geek.nz
    ServerAlias xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion
    Header set Onion-Location "http://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion%{REQUEST_URI}s"
    Header set alt-svc 'h2="xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion:443"; ma=315360000; persist=1'
    ... 
<VirtualHost *:80>
    ServerName xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion
    Include /etc/fmarier-org/blog-common
</VirtualHost>
Then I followed the Mozilla Observatory recommendations and enabled the following security headers:
Header set Content-Security-Policy: "default-src 'none'; report-uri https://fmarier.report-uri.com/r/d/csp/enforce ; style-src 'self' 'unsafe-inline' ; img-src 'self' https://seccdn.libravatar.org/ ; script-src https://feeding.cloud.geek.nz/ikiwiki/ https://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion/ikiwiki/ http://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion/ikiwiki/ 'unsafe-inline' 'sha256-pA8FbKo4pYLWPDH2YMPqcPMBzbjH/RYj0HlNAHYoYT0=' 'sha256-Kn5E/7OLXYSq+EKMhEBGJMyU6bREA9E8Av9FjqbpGKk=' 'sha256-/BTNlczeBxXOoPvhwvE1ftmxwg9z+WIBJtpk3qe7Pqo=' ; base-uri 'self'; form-action 'self' ; frame-ancestors 'self'"
Header set X-Frame-Options: "SAMEORIGIN"
Header set Referrer-Policy: "same-origin"
Header set X-Content-Type-Options: "nosniff"
Note that the Mozilla Observatory is mistakenly identifying HTTP onion services as insecure, so you can ignore that failure. I also used the Mozilla TLS config generator to improve the TLS config for my server. Then I added security.txt and gpc.json to the root of my git repo and then added the following aliases to put these files in the right place:
Alias /.well-known/gpc.json /var/www/blog/gpc.json
Alias /.well-known/security.txt /var/www/blog/security.txt
I also followed these instructions to create a sitemap for my blog with the following alias:
Alias /sitemap.xml /var/www/blog/sitemap/index.rss
Finally, I simplified a few error pages to save bandwidth:
ErrorDocument 301 " "
ErrorDocument 302 " "
ErrorDocument 404 "Not Found"

Monitoring 404s
Another advantage of running my own web server is that I can monitor the 404s easily using logcheck by putting the following in /etc/logcheck/logcheck.logfiles:
/var/log/apache2/blog-error.log 
Based on that, I added a few redirects to point bots and users to the location of my RSS feed:
Redirect permanent /atom /index.atom
Redirect permanent /comments.rss /comments/index.rss
Redirect permanent /comments.atom /comments/index.atom
Redirect permanent /FeedingTheCloud /index.rss
Redirect permanent /feed /index.rss
Redirect permanent /feed/ /index.rss
Redirect permanent /feeds/posts/default /index.rss
Redirect permanent /rss /index.rss
Redirect permanent /rss/ /index.rss
and to tell them to stop trying to fetch obsolete resources:
Redirect gone /~ff/FeedingTheCloud
Redirect gone /gittip_button.png
Redirect gone /ikiwiki.cgi
I also used these 404s to discover a few old Feedburner URLs that I could redirect to the right place using archive.org:
Redirect permanent /feeds/1572545745827565861/comments/default /posts/watch-all-of-your-logs-using-monkeytail/comments.atom
Redirect permanent /feeds/1582328597404141220/comments/default /posts/news-feeds-rssatom-for-mythtvorg-and/comments.atom
...
Redirect permanent /feeds/8490436852808833136/comments/default /posts/recovering-lost-git-commits/comments.atom
Redirect permanent /feeds/963415010433858516/comments/default /posts/debugging-openwrt-routers-by-shipping/comments.atom
I also put the following robots.txt in the git repo in order to stop a bunch of authentication errors coming from crawlers:
User-agent: *
Disallow: /blog.cgi
Disallow: /ikiwiki.cgi

Future improvements
There are a few things I'd like to improve on my current setup. The first one is to remove the ikiwikihosting and gitpush plugins and replace them with a small script which would simply git push to the read-only GitHub mirror. Then I could uninstall the ikiwiki-hosting-common and ikiwiki-hosting-web packages since that's all I use them for. Next, I would like to have proper support for signed git pushes. At the moment, I have the following in /home/blog/source.git/config:
[receive]
    advertisePushOptions = true
    certNonceSeed = "(random string)"
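I'd also like to reject unsigned pushes. A pre-receive hook along these lines might do it (an untested sketch; git exports GIT_PUSH_CERT_STATUS to receive hooks, and "G" means the push certificate signature verified fine):
#!/bin/sh
# reject any push that does not come with a valid push certificate
if [ "$GIT_PUSH_CERT_STATUS" != "G" ]; then
    echo "rejected: signed push required" >&2
    exit 1
fi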
While my blog now has a CSP policy which doesn't rely on unsafe-inline for scripts, it does rely on unsafe-inline for stylesheets. I tried to remove this but the actual calls to allow seemed to be located deep within jQuery and so I gave up. Update: now fixed.

Finally, I'd like to figure out a good way to deal with articles which don't currently have comments. At the moment, if you try to subscribe to their comment feed, it returns a 404. For example:
[Sun Jun 06 17:43:12.336350 2021] [core:info] [pid 30591:tid 140253834704640] [client 66.249.66.70:57381] AH00128: File does not exist: /var/www/blog/posts/using-iptables-with-network-manager/comments.atom
This is obviously not ideal since many feed readers will refuse to add a feed which is currently not found even though it could become real in the future. If you know of a way to fix this, please let me know.

2 June 2021

Sven Hoexter: pulseaudio/alsa and dynamic mic sensitivity in my browser

It's a gross hack but it works for now. To prevent overly sensitive mic settings autotuned by the browser in web conferences, I currently edit, as root, /usr/share/pulseaudio/alsa-mixer/paths/analog-input-internal-mic.conf. In the [Element Capture] section, change the volume setting from merge to 80. The config block as a whole looks like this:
[Element Capture]
switch = mute
volume = 80
override-map.1 = all
override-map.2 = all-left,all-right
Solution found at https://askubuntu.com/a/761103.

10 May 2021

Russell Coker: More EVM

This is another post about EVM/IMA which has as its main purpose providing useful web search results for problems. However if reading it on a planet feed inspires someone to play with EVM/IMA then that's good too, it's interesting technology. When using EVM/IMA in the Linux kernel, if dmesg has errors like op=appraise_data cause=missing-HMAC, the missing-HMAC means that the error code in the kernel source is INTEGRITY_NOLABEL which has the comment "No security.evm xattr". You can check for the xattr on a file with the following command (this example has the security.evm xattr):
# getfattr -d -m - /etc/fstab 
getfattr: Removing leading '/' from absolute path names
# file: etc/fstab
security.evm=0sAwICqGOsfwCAvgE9y9OP74QxJ/I+3eOSF2n2dM51St98z/7LYHFd9rfGTvssvhTSYL9G8cTdRAH8ozggJu7VCzggW1REoTjnLcPeuMJsrMbW3DwVrB6ldDmJzyenLMjnIHmRDDeK309aRbLVn2ueJZ07aMDcSr+sxhOOAQ/GIW4SW8L1AKpKn4g=
security.ima=0sAT+Eivfxl+7FYI+Hr9K4sE6IieZ+
security.selinux="system_u:object_r:etc_t:s0"
If dmesg has errors like op=appraise_data cause=invalid-HMAC, the invalid-HMAC means that the error code in the kernel source is INTEGRITY_FAIL which has the comment "Invalid HMAC/signature". These errors are from the evm_verifyxattr() function in Linux kernel 5.11.14. The error "evm: HMAC key is not set" means that the evm key is not initialised; the key needs to be loaded into the kernel and EVM is initialised by the command echo 1 > /sys/kernel/security/evm (or possibly some equivalent from a utility like evmctl). When the key is loaded the kernel gives the message "evm: key initialized" and after that /sys/kernel/security/evm is read-only. If there is something wrong with the key the kernel gives the message "evm: key initialization failed". It seems that the way to determine if your key is good is to try writing 1 to /sys/kernel/security/evm and see what happens. After that the command cat /sys/kernel/security/evm should return 3. The Gentoo wiki has good documentation on how to create and load the keys, which has to be done before initialising EVM [1]. I'll write more about that in another post.
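Condensed into commands, the checks described above look like this (assuming the keys were already created and loaded, e.g. as per the Gentoo wiki):
echo 1 > /sys/kernel/security/evm   # initialise EVM; afterwards the file is read-only
cat /sys/kernel/security/evm        # should print 3 if the key was good
dmesg | grep -i evm                 # look for "evm: key initialized" or the failure messages quoted above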

21 April 2021

Sven Hoexter: bullseye: doveadm as unprivileged user with dovecot ssl config

The dovecot version which will be released with bullseye seems to require some subtle config adjustment if you want to run doveadm as an unprivileged user. I guess one of the common cases is executing doveadm pw, e.g. if you use postfixadmin. For myself that manifested in the nginx error log, which I use in combination with php-fpm, as:
2021/04/19 20:22:59 [error] 307467#307467: *13 FastCGI sent in stderr: "PHP message:
Failed to read password from /usr/bin/doveadm pw ... stderr: doveconf: Fatal: 
Error in configuration file /etc/dovecot/conf.d/10-ssl.conf line 12: ssl_cert:
Can't open file /etc/dovecot/private/dovecot.pem: Permission denied
You can easily see the same error message if you just execute something like doveadm pw -p test123. The workaround is to move your ssl configuration to a new file which is only readable by root, and create a dummy one which disables ssl and has a !include_try on the real one. Maybe best explained by showing the modification:
cd /etc/dovecot/conf.d
cp 10-ssl.conf 10-ssl_server
chmod 600 10-ssl_server
echo 'ssl = no' > 10-ssl.conf
echo '!include_try 10-ssl_server' >> 10-ssl.conf
Discussed upstream here.

9 January 2021

Thorsten Alteholz: My Debian Activities in December 2020

FTP master
This month I only accepted 8 packages and, like last month, rejected 0. Despite the holidays 293 packages got accepted.

Debian LTS
This was my seventy-eighth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. This month my all in all workload has been 26h. During that time I did LTS uploads of: Unfortunately package slirp has the same version in Stretch and Buster. So I first had to upload slirp/1:1.0.17-11 to unstable, in order to be allowed to fix the CVE in Buster and to finally upload a new version to Stretch. Meanwhile the fix for Buster has been approved by the Release Team and I am waiting for the next point release now. I also prepared a debdiff for influxdb, which will result in DSA-4823-1 in January. As there appeared new CVEs for openjpeg2, I did not do an upload yet. This is planned for January now. Last but not least I did some days of frontdesk duties.

Debian ELTS
This month was the thirtieth ELTS month. During my allocated time I uploaded: As well as for LTS, I did not finish work on all CVEs of openjpeg2, so the upload is postponed to January. Last but not least I did some days of frontdesk duties. Unfortunately I also had to give back some hours.

Other stuff
This month I uploaded new upstream versions of: I fixed one or two bugs in: I improved packaging of: Some packages just needed a source upload: and there have been even some new packages: With these uploads I finished the libosmocom- and libctl-transitions. The Debian Med Advent Calendar was again really successful this year. There was no new record, but with 109 closed bugs it reached the second-highest number so far.
year number of bugs closed
2011 63
2012 28
2013 73
2014 5
2015 150
2016 95
2017 105
2018 81
2019 104
2020 109
Well done everybody who participated. It is really nice to see that Andreas is no longer a lone wolf.

23 December 2020

Sven Hoexter: Jenkins dynamically parameterized pipelines for terraform execution

Jenkins in the Ops space is in general already painful. Lately the deprecation of the multiple-scms plugin caused some headache, because we relied heavily on it to generate pipelines in a Seedjob based on structure inside secondary repositories. We kind of started from scratch now and ship parameterized pipelines defined in Jenkinsfiles in those secondary repositories. Basically that is the way it should be: you store the pipeline definition along with the code you'd like to execute. In our case that is mostly terraform and ansible.

Problem
Directory structure is roughly "stage" -> "project" -> "service". We'd like to have one pipeline job per project, which dynamically reads all service folder names and offers those as available parameters. A service folder is the smallest entity we manage with terraform in a separate state file. Now Jenkins pipelines are by intention limited, but you can add some groovy at will if you whitelist the usage in Jenkins. You have to click through some security though to make it work.

Jenkinsfile
This is basically a commented version of the Jenkinsfile we now copy around as a template, to be manually adjusted per project.
// Syntax: https://jenkins.io/doc/book/pipeline/syntax/
// project name as we use it in the folder structure and job name
def TfProject = "myproject-I-dev"
// directory relative to the repo checkout inside the jenkins workspace
def jobDirectory = "terraform/dev/${TfProject}"
// informational string to describe the stage or project
def stageEnvDescription = "DEV"
/* Attention please if you rebuild the Jenkins instance consider the following:
   - You've to run this job at least *thrice*. It first has to checkout the
     repository, then you've to add permissions for the groovy part, and on
     the third run you can gather the list of available terraform folders.
   - As a safeguard the first folder name is always the invalid string
     "choose-one". That prevents accidental execution of a random project.
   - If you add a new terraform folder you've to run the "choose-one" dummy rollout so
     the dynamic parameters pick up the new folder. */
/* Here we hardcode the path to the correct job workspace on the jenkins host, and
   discover the service folder list. We have to filter it slightly to avoid temporary folders created by Jenkins (like @tmp folders). */
List tffolder = new File("/var/lib/jenkins/jobs/terraform ${TfProject}/workspace/${jobDirectory}").listFiles().findAll { it.isDirectory() && it.name ==~ /(?i)[a-z0-9_-]+/ }.sort()
/* ensure the "choose-one" dummy entry is always the first in the list, otherwise
   initial executions might execute something. By default the first parameter is
   used if none is selected */
tffolder.add(0,"choose-one")
pipeline {
    agent any
    /* Show a choice parameter with the service directory list we stored
       above in the variable tffolder */
    parameters {
        choice(name: "TFFOLDER", choices: tffolder)
    }
    // Configure logrotation and coloring.
    options {
        buildDiscarder(logRotator(daysToKeepStr: "30", numToKeepStr: "100"))
        ansiColor("xterm")
    }
    // Set some variables for terraform to pick up the right service account.
    environment {
        GOOGLE_CLOUD_KEYFILE_JSON = '/var/lib/jenkins/cicd.json'
        GOOGLE_APPLICATION_CREDENTIALS = '/var/lib/jenkins/cicd.json'
    }
    stages {
        stage('TF Plan') {
            /* Make sure on every stage that we only execute if the
               choice parameter is not the dummy one. Ensures we
               can run the pipeline smoothly for re-reading the
               service directories. */
            when { expression { params.TFFOLDER != "choose-one" } }
            steps {
                /* Initialize terraform and generate a plan in the selected
                   service folder. */
                dir("${params.TFFOLDER}") {
                    sh 'terraform init -no-color -upgrade=true'
                    sh 'terraform plan -no-color -out myplan'
                }
                // Read in the repo name we act on for informational output.
                script {
                    remoteRepo = sh(returnStdout: true, script: 'git remote get-url origin').trim()
                }
                echo "INFO: job *${JOB_NAME}* in *${params.TFFOLDER}* on branch *${GIT_BRANCH}* of repo *${remoteRepo}*"
            }
        }
        stage('TF Apply') {
            /* Run terraform apply only after manual acknowledgement, we have to
               make sure that the when { } condition is actually evaluated before
               the input. Default is input before when. */
            when {
                beforeInput true
                expression { params.TFFOLDER != "choose-one" }
            }
            input {
                message "Cowboy would you really like to run **${JOB_NAME}** in **${params.TFFOLDER}**"
                ok "Apply ${JOB_NAME} to ${stageEnvDescription}"
            }
            steps {
                dir("${params.TFFOLDER}") {
                    sh 'terraform apply -no-color -input=false myplan'
                }
            }
        }
    }
    post {
        failure {
            // You can also alert to noisy chat platforms on failures if you like.
            echo "job failed"
        }
    }
}
job-dsl side of the story
Having all those when conditions in the pipeline stages above allows us to create a dependency between successful Seedjob executions and just let that trigger the execution of the pipeline jobs. This is important because the Seedjob execution itself will reset all pipeline jobs, so your dynamic parameters are gone. By making sure we can re-execute the job, and doing that automatically, we still have up to date parameterized pipelines whenever the Seedjob ran successfully. The job-dsl script looks like this:
import javaposse.jobdsl.dsl.DslScriptLoader;
import javaposse.jobdsl.plugin.JenkinsJobManagement;
import javaposse.jobdsl.plugin.ExecuteDslScripts;
def params = [
    // Defaults are repo: mycorp/admin, branch: master, jenkinsFilename: Jenkinsfile
    pipelineJobs: [
        [name: 'terraform myproject-I-dev', jenkinsFilename: 'terraform/dev/myproject-I-dev/Jenkinsfile', upstream: 'Seedjob'],
        [name: 'terraform myproject-I-prod', jenkinsFilename: 'terraform/prod/myproject-I-prod/Jenkinsfile', upstream: 'Seedjob'],
    ],
]
params.pipelineJobs.each { job ->
    pipelineJob(job.name) {
        definition {
            cpsScm {
                // assume admin and branch master as a default, look for Jenkinsfile
                def repo = job.repo ?: 'mycorp/admin'
                def branch = job.branch ?: 'master'
                def jenkinsFilename = job.jenkinsFilename ?: 'Jenkinsfile'
                scm {
                    git("ssh://git@github.com/${repo}.git", branch)
                }
                scriptPath(jenkinsFilename)
            }
        }
        properties {
            pipelineTriggers {
                triggers {
                    if (job.upstream) {
                        upstream {
                            upstreamProjects("${job.upstream}")
                            threshold('SUCCESS')
                        }
                    }
                }
            }
        }
    }
}
Disadvantages
There are still a bunch of disadvantages you've to consider.

Jenkins Rebuilds are Painful
In general we rebuild our Jenkins instances quite frequently. With the approach outlined here in place, you've to allow the groovy script execution after the first Seedjob execution, and then go through at least another round of run the job, allow permissions, run the job, until it's finally all up and running.

Copy around Jenkinsfile
Whenever you create a new project you've to copy around Jenkinsfiles for each and every stage and modify the variables at the top accordingly.

Keep the Seedjob definitions and Jenkinsfile in Sync
You not only have to copy the Jenkinsfile around, but you also have to keep the variables and names in sync with what you define for the Seedjob. Sadly the pipeline env-vars are not available outside of the pipeline when we execute the groovy parts.

Kudos
This setup was crafted with a lot of help by Michael and Eric.

21 December 2020

Sven Hoexter: docker buildx sugar - dumping results to disk

The latest docker 20.10.x release unlocks the buildx subcommands, which allow for some sugar like building something in a container and dumping the result to your local directory in one command.
Dockerfile
FROM docker-registry.mycorp.com/debian-node:lts as builder
USER service
COPY . /opt/service
RUN cd /opt/service; npm install; npm run build
FROM scratch as dist
COPY --from=builder /opt/service/dist /
build with
docker buildx build --target=dist --output type=local,dest=$(pwd)/pages/ .
Here we build a page, copy the result with all assets from the /opt/service/dist directory to an empty image and dump it into the local pages directory.
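The same mechanism can also dump the result as a tarball instead of a directory, using the tar exporter (a variant, not part of the original setup):
docker buildx build --target=dist --output type=tar,dest=pages.tar .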

20 December 2020

Sven Hoexter: Mock a Serial pty Device with socat

Another note to myself before I forget about this nifty usage of socat again. I was looking for something to mock a serial device, similar to a microcontroller which usually ends up as /dev/ttyACM0 and might output some text. What I found is a very helpful post on stackoverflow showing an example utilizing socat.
$ socat -d -d pty,rawer pty,rawer
2020/12/20 21:37:53 socat[29130] N PTY is /dev/pts/8
2020/12/20 21:37:53 socat[29130] N PTY is /dev/pts/11
2020/12/20 21:37:53 socat[29130] N starting data transfer loop with FDs [5,5] and [7,7]
Write whatever you need to the second pty, here /dev/pts/11, e.g.
$ i=0; while :; do echo "foo: ${i}" > /dev/pts/11; let i++; sleep 5; done
Now you can listen with whatever you like, e.g. some tool you work on, on the first pty, here /dev/pts/8. For demonstration purposes just use cat:
$ cat /dev/pts/8
foo: 0
foo: 1
socat is an awesome tool; looking through the manpage you need some knowledge about sockets, but it's incredibly versatile.

22 November 2020

Markus Koschany: My Free Software Activities in October 2020

Welcome to gambaru.de. Here is my monthly report (+ the first week in November) that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games
Debian Java
pdfsam
Misc
Debian LTS
This was my 56th month as a paid contributor and I have been paid to work 20.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

ELTS
Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project, but all Debian users benefit from it without cost. The current ELTS release is Debian 8 "Jessie". This was my 29th month and I have been paid to work 15 hours on ELTS. Thanks for reading and see you next time.

14 October 2020

Sven Hoexter: Nice Helper to Sanitize File Names - sanity.pl

One of the most awesome helpers I carry around in my ~/bin since the early '00s is the sanity.pl script written by Andreas Gohr. It just recently came back into use when I started to archive some awesome Corona-enforced live session music with youtube-dl. Update: Francois Marier pointed out that Debian contains the detox package, which has similar functionality.

18 September 2020

Sven Hoexter: Avoiding the GitHub WebUI

Now that GitHub released v1.0 of the gh cli tool, and this is all over HN, it might make sense to write a note about my clumsy aliases and shell functions I cobbled together in the past month. Background story is that my dayjob moved to GitHub coming from Bitbucket. From my point of view the WebUI for Bitbucket is mediocre, but the one at GitHub is just awful and painful to use, especially for PR processing. So I longed for the terminal and ended up with gh and wtfutil as a dashboard. The setup we have is painful on its own, with several orgs and repos which are more like monorepos covering several corners of infrastructure, and some which are very focused on a single component. All workflows are anti GitHub workflows, so you must have permission on the repo, create a branch in that repo as a feature branch, and open a PR for the merge back into master.

gh functions and aliases
# setup a token with perms to everything, dealing with SAML is a PITA
export GITHUB_TOKEN="c0ffee4711"
# I use a light theme on my terminal, so adjust the gh theme
export GLAMOUR_STYLE="light"
#simple aliases to poke at a PR
alias gha="gh pr review --approve"
alias ghv="gh pr view"
alias ghd="gh pr diff"
### github support functions, most invoked with a PR ID as $1
#primary function to review PRs
function ghs {
    gh pr view ${1}
    gh pr checks ${1}
    gh pr diff ${1}
}
# very custom PR create function relying on ORG and TEAM settings hard coded
# main idea is to create the PR with my team directly assigned as reviewer
function ghc {
    if git status | grep -q 'Untracked'; then
        echo "ERROR: untracked files in branch"
        git status
        return 1
    fi
    git push --set-upstream origin HEAD
    gh pr create -f -r "$(git remote -v | grep push | grep -oE 'myorg-[a-z]+')/myteam"
}
# merge a PR and update master if we're not in a different branch
function ghm {
    gh pr merge -d -r ${1}
    if [[ "$(git rev-parse --abbrev-ref HEAD)" == "master" ]]; then
        git pull
    fi
}
# get an overview over the files changed in a PR
function ghf {
    gh pr diff ${1} | diffstat -l
}
# generate a link to a commit in the WebUI to pass on to someone else
# input is a git commit hash
function ghlink {
    local repo="$(git remote -v | grep -E "github.+push" | cut -d':' -f 2 | cut -d'.' -f 1)"
    echo "https://github.com/${repo}/commit/${1}"
}
wtfutil
I have a terminal covering half my screensize with small dashboards listing PRs for the repos I care about. For other repos I reverted back to mail notifications which get sorted and processed from time to time. A sample dashboard config looks like this:
github_admin:
  apiKey: "c0ffee4711"
  baseURL: ""
  customQueries:
    othersPRs:
      title: "Pull Requests"
      filter: "is:open is:pr -author:hoexter -label:dependencies"
  enabled: true
  enableStatus: true
  showOpenReviewRequests: false
  showStats: false
  position:
    top: 0
    left: 0
    height: 3
    width: 1
  refreshInterval: 30
  repositories:
    - "myorg/admin"
  uploadURL: ""
  username: "hoexter"
  type: github
The -label:dependencies is used here to filter out dependabot PRs in the dashboard.

Workflow
Look at a PR with ghv $ID, if it's ok ACK it with gha $ID. Create a PR from a feature branch with ghc and later on merge it with ghm $ID. The $ID is retrieved from looking at my wtfutil based dashboard.

Security Considerations
The world is full of bad jokes. For the WebUI access I've the full array of pain with SAML auth, which expires too often, and 2nd factor verification for my account backed by a Yubikey. But to work with the CLI you basically need an API token with full access, everything else drives you insane. So I gave in and generated exactly that. End result is that I now have an API token - which is basically a password - which has full power, and is stored in config files and environment variables. So the security features created around the login are all void now. Was that the aim of it after all?

8 September 2020

Sven Hoexter: Cloudflare Bot Management, MITM Boxes and TLS 1.3

This is just a "warn your brothers" post for those who use Cloudflare Bot Management, and have customers which use MITM boxes to break up TLS 1.3 connections. Be aware that right now some heuristic rules in the Cloudflare Bot Management score TLS 1.3 requests made by some MITM boxes with 1 - which equals "we're 99.99% sure that this is none human browser traffic". While technically correct - the TLS connection hitting the Cloudflare Edge node is not established by a browser - that does not help your customer if you block those requests. If you do something like blocking requests with a BM score of 1 at the Cloudflare Edge, you might want to reconsider that at the moment and sent a captcha challenge instead. While that is not a lot nicer, and still pisses people off, you might find a balance there between protecting yourself and still having some customers. I've a confirmation for this happening with Cisco WSA, but it's likely to be also the case with other vendors. Breaking up TLS 1.2 seems to be stealthy enough in those appliances that it's not detected, so this issue creeps in with more enterprises rolling out modern browser. You can now enter youself here a rant about how bad the client-server internet of 2020 is, and how bad it is that some of us rely on Cloudflare, and that they have accumulated a way too big market share. But the world is as it is. :(

24 August 2020

Sven Hoexter: google cloud buster images without python 2

Update
I have to stand corrected. noahm@ wrote me, because the Debian Cloud Image maintainers only ever included python explicitly in Azure images. The most likely explanation for the change in the Google images is that Google just ported the last parts of their own software to python 3, and subsequently removed python 2. With some relief one can just conclude it's only our own fault that we did not build our own images, which include all our own dependencies. Take it as a reminder to always build your own images. Always. Be it VMs or docker. Build your own image.

Original Post
Fun in the morning: we realized that the Debian Cloud image builds dropped python 2 and that propagated to the Google provided Debian/buster images. So in case you use something like ansible, and so far assumed python 2 as the default interpreter, and installed additional python 2 modules to support ansible modules, you now have to either install python 2 again or just move to python 3. We just try to suffer it through now, and set interpreter_python = auto in our ansible.cfg to anticipate the new default behaviour, which is planned for ansible 2.12. See also https://docs.ansible.com/ansible/latest/reference_appendices/interpreter_discovery.html

Other lesson to learn here: the GCE Debian stable images are not stable. Blends in nicely with this rant, though it's not 100% a Google Cloud foul this time.
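For reference, the interpreter_python setting mentioned above lives in the [defaults] section of ansible.cfg; a minimal sketch of the change:
cat >> ansible.cfg <<'EOF'
[defaults]
interpreter_python = auto
EOF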

14 August 2020

Sven Hoexter: Retrieve WLAN PSK via nmcli

Note to myself so I do not have to figure that out every few months when I've to dig out a WLAN PSK from my existing configuration. Step 1: Figure out the UUID of the network:
$ nmcli con show
NAME                  UUID                                  TYPE      DEVICE          
br-59d010130b86       d8672d3d-7cf6-484f-9ef8-e6ec3e73bef7  bridge    br-59d010130b86 
FRITZ!Box 7411        1ed1cec1-f586-4e75-ba6d-c9f6f4cba6e2  wifi      wlp4s0
[...]
Step 2: Request to view the PSK for this network based on the UUID
$ nmcli --show-secrets --fields 802-11-wireless-security.psk con show '1ed1cec1-f586-4e75-ba6d-c9f6f4cba6e2'
802-11-wireless-security.psk:           0815471123420511111
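If the connection name is unique, both steps can also be collapsed into one by selecting the connection by name instead of UUID:
$ nmcli --show-secrets --fields 802-11-wireless-security.psk con show 'FRITZ!Box 7411'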

Previous.